MBI Videos

Bei Wang

  • Bei Wang
    In this talk, we will give two examples of how topology can be used as a knob for machine learning. In the first example, topology is used as an interior knob for dimensionality reduction: in a method called H-Isomap, topological information is combined with landmark Isomap to obtain homology-preserving embeddings of high-dimensional point clouds. In the second example, topology is used as an exterior knob for probing deep learning models. Specifically, we probe a trained deep neural network by studying neuron activations (combinations of neuron firings) over a large set of input images. Using topological summaries, we study the organizational principles behind neuron activations and how these activations are related within a layer and across layers. Using an interactive tool called TopoAct, we present visual exploration scenarios that provide valuable insights into the learned representations of an image classifier.
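    As a minimal sketch of the two ingredients named above (and not of H-Isomap itself), the Python snippet below computes persistent homology of a point cloud with the ripser package and a standard, non-landmark Isomap embedding with scikit-learn; the noisy-circle data and all parameter choices are illustrative assumptions. H-Isomap's contribution is the coupling of the two steps so that the embedding preserves the detected homology.

      # Minimal sketch (Python): persistent homology + Isomap on a noisy circle.
      # Assumptions: ripser and scikit-learn are installed; this uses standard
      # Isomap, not the landmark variant, and is not the H-Isomap coupling itself.
      import numpy as np
      from ripser import ripser
      from sklearn.manifold import Isomap

      rng = np.random.default_rng(0)
      theta = rng.uniform(0.0, 2.0 * np.pi, 300)
      # Noisy circle: a point cloud whose 1-dimensional homology (one loop)
      # a homology-preserving embedding should keep intact.
      X = np.column_stack([np.cos(theta), np.sin(theta)])
      X += 0.05 * rng.normal(size=X.shape)

      # Persistence diagrams in dimensions 0 and 1; a long bar in dgms[1]
      # records the loop.
      dgms = ripser(X, maxdim=1)["dgms"]

      # Standard Isomap embedding into 2D; the talk's method combines this
      # step with the topological information computed above.
      embedding = Isomap(n_neighbors=10, n_components=2).fit_transform(X)
      print("H1 intervals:", len(dgms[1]), "| embedding shape:", embedding.shape)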
  • Bei Wang
    The Reeb space, which generalizes the notion of a Reeb graph, is one of the few tools in topological data analysis and visualization suitable for the study of multivariate scientific datasets. First introduced by Edelsbrunner et al., it compresses the components of the level sets of a multivariate mapping and obtains a summary representation of their relationships. A related construction called mapper (Singh et al.) and a special case of the mapper construction called the Joint Contour Net (Carr et al.) have been shown to be effective in visual analytics. Mapper and the JCN are intuitively regarded as discrete approximations of the Reeb space, but this intuition has lacked formal proofs or approximation guarantees. Dey et al. posed the open question of whether the mapper construction converges to the Reeb space in the limit.
    We are interested in developing a theoretical understanding of the relationship between the Reeb space and its discrete approximations to support their use in practical data analysis. Using tools from category theory, we formally prove convergence between the Reeb space and mapper in terms of an interleaving distance between their categorical representations. Given a sequence of refined discretizations, we prove that these approximations converge to the Reeb space in the interleaving distance; this also helps quantify the approximation quality of the discretization at a fixed resolution (a minimal mapper sketch follows below).
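    As a minimal sketch of the mapper construction discussed above (using the open-source KeplerMapper library rather than anything specific to this work), the snippet below pulls back a cover of a one-dimensional lens by overlapping intervals and clusters within each piece; the dataset, lens, cover resolution, and clusterer are illustrative assumptions. Refining the cover corresponds to the sequence of discretizations whose convergence to the Reeb space the abstract addresses.

      # Minimal mapper sketch (Python) with the open-source kmapper library.
      # Assumptions: kmapper and scikit-learn are installed; the lens, cover
      # resolution, and clusterer below are illustrative choices.
      import kmapper as km
      from sklearn.cluster import DBSCAN
      from sklearn.datasets import make_circles

      # Two concentric noisy circles as a toy point cloud.
      X, _ = make_circles(n_samples=500, noise=0.03, factor=0.4, random_state=0)

      mapper = km.KeplerMapper(verbose=0)
      # Lens: project each point to its first coordinate (a real-valued map).
      lens = mapper.fit_transform(X, projection=[0])

      # Cover the lens with overlapping intervals and cluster the preimage of
      # each interval; nodes are clusters, edges record clusters sharing points.
      graph = mapper.map(
          lens,
          X,
          cover=km.Cover(n_cubes=12, perc_overlap=0.3),
          clusterer=DBSCAN(eps=0.3, min_samples=5),
      )
      print(len(graph["nodes"]), "mapper nodes")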
